This paper introduces a novel Discriminator-constrained Optimal Transport Network (DOTN) that performs unsupervised domain adaptation for speech enhancement (SE), an important regression task in speech processing. DOTN aims to estimate clean references of noisy speech in a target domain by exploiting knowledge available from a source domain. The domain shift between training and testing data has been reported to be an obstacle to learning in many fields. Although a rich literature exists on unsupervised domain adaptation for classification, proposed methods, especially for regression, remain scarce and usually depend on additional information about the input data. The proposed DOTN approach tactically fuses optimal transport (OT) theory from mathematical analysis with generative adversarial frameworks to help evaluate continuous labels in the target domain. Experimental results on two SE tasks demonstrate that, by extending the classical OT formulation, our proposed DOTN outperforms previous adversarial domain adaptation frameworks in a purely unsupervised manner.
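As a minimal sketch of the classical OT formulation the abstract says DOTN extends, the snippet below computes an entropy-regularized OT cost between batches of source and target feature vectors via Sinkhorn iterations; the shapes, the regularization weight `eps`, and the iteration count are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' code): entropy-regularized OT between a
# batch of source and target feature vectors via Sinkhorn iterations.
import numpy as np

def sinkhorn_ot(xs, xt, eps=0.1, n_iters=100):
    """Approximate OT cost between source features xs and target features xt."""
    n, m = len(xs), len(xt)
    # Pairwise squared-Euclidean ground cost.
    C = ((xs[:, None, :] - xt[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)                             # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                         # alternating marginal scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                  # transport plan
    return (P * C).sum()                             # entropic OT cost

rng = np.random.default_rng(0)
print(sinkhorn_ot(rng.normal(size=(32, 16)), rng.normal(size=(48, 16)) + 1.0))
```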
Diffusion models have achieved justifiable popularity by attaining state-of-the-art performance in generating realistic objects from seemingly arbitrarily complex data distributions, including when conditioning generation on labels. Unfortunately, however, their iterative nature renders them very computationally inefficient during the sampling process. For the multi-class conditional generation problem, we propose a novel, structurally unique framework of diffusion models which are hierarchically branched according to the inherent relationships between classes. In this work, we demonstrate that branched diffusion models offer major improvements in efficiently generating samples from multiple classes. We also showcase several other advantages of branched diffusion models, including ease of extension to novel classes in a continual-learning setting, and a unique interpretability that offers insight into these generative models. Branched diffusion models represent an alternative paradigm to their traditional linear counterparts, and can have a large impact on how we use diffusion models for efficient generation, online learning, and scientific discovery.
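One plausible way to organize the hierarchical branching the abstract describes is a shared trunk for the high-noise reverse steps and per-class branches for the final, low-noise steps. The sketch below illustrates only that control flow; the branch point `T_SPLIT`, the denoisers, and the schedule are hypothetical placeholders, not the paper's networks.

```python
# Illustrative sketch only: a branched reverse process with a shared trunk for
# high-noise timesteps and per-class branches for low-noise timesteps.
import numpy as np

T, T_SPLIT = 1000, 600  # total steps; trunk handles t >= T_SPLIT (assumed)

def trunk_denoise(x, t):          # stand-in for the shared trunk network
    return x * (1.0 - 1.0 / T)

def branch_denoise(x, t, cls):    # stand-in for a class-specific branch
    return x * (1.0 - 1.0 / T) + 0.001 * cls

def sample(cls, dim=8, rng=np.random.default_rng(0)):
    x = rng.normal(size=dim)      # start from pure noise
    for t in reversed(range(T)):
        # High-noise steps are class-agnostic and shared; only the final,
        # low-noise steps run the branch for the requested class.
        x = trunk_denoise(x, t) if t >= T_SPLIT else branch_denoise(x, t, cls)
    return x

print(sample(cls=3))
```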
We develop a Synthetic Fusion Pyramid Network (SPF-Net) with a scale-aware loss function design for accurate crowd counting. Existing crowd-counting methods assume that the training annotation points are accurate and thus ignore the fact that noisy annotations can lead to large model-learning bias and counting error, especially for counting highly dense crowds that appear far away. To the best of our knowledge, this work is the first to properly handle such noise at multiple scales in end-to-end loss design and thus push the crowd-counting state of the art. We model the noise of crowd annotation points as a Gaussian and derive the crowd probability density map from the input image. We then approximate the joint distribution of crowd density maps with the full covariance of multiple scales and derive a low-rank approximation for tractability and efficient implementation. The derived scale-aware loss function is used to train the SPF-Net. We show that it outperforms various loss functions on four public datasets: UCF-QNRF, UCF CC 50, NWPU, and ShanghaiTech A-B. The proposed SPF-Net can accurately predict the locations of people in the crowd, despite being trained on noisy annotations.
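The first step the abstract builds on, treating each annotation point as a Gaussian and rendering a crowd probability density map, can be sketched as follows; the multi-scale low-rank covariance loss itself is paper-specific and is not reproduced here, and `sigma` and the grid size are illustrative.

```python
# Sketch: render a density map by placing a normalized Gaussian at each
# annotation point, so the map integrates to the crowd count.
import numpy as np

def density_map(points, h, w, sigma=4.0):
    ys, xs = np.mgrid[0:h, 0:w]
    dm = np.zeros((h, w))
    for (py, px) in points:
        g = np.exp(-((ys - py) ** 2 + (xs - px) ** 2) / (2 * sigma ** 2))
        dm += g / g.sum()          # each head integrates to 1
    return dm

dm = density_map([(10.0, 12.0), (30.0, 40.0)], h=64, w=64)
print(dm.sum())                    # ~2.0, i.e. the crowd count
```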
The increased importance of mobile photography created a need for fast and performant RAW image processing pipelines capable of producing good visual results in spite of the mobile camera sensor limitations. While deep learning-based approaches can efficiently solve this problem, their computational requirements usually remain too large for high-resolution on-device image processing. To address this limitation, we propose a novel PyNET-V2 Mobile CNN architecture designed specifically for edge devices, able to process RAW 12MP photos directly on mobile phones in under 1.5 seconds while producing high perceptual photo quality. To train and to evaluate the performance of the proposed solution, we use the real-world Fujifilm UltraISP dataset consisting of thousands of RAW-RGB image pairs captured with a professional medium-format 102MP Fujifilm camera and a popular Sony mobile camera sensor. The results demonstrate that the PyNET-V2 Mobile model can substantially surpass the quality of traditional ISP pipelines, while outperforming the previously introduced neural network-based solutions designed for fast image processing. Furthermore, we show that the proposed architecture is also compatible with the latest mobile AI accelerators such as NPUs or APUs that can be used to further reduce the latency of the model to as little as 0.5 seconds. The dataset, code and pre-trained models used in this paper are available on the project website: https://github.com/gmalivenko/PyNET-v2
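A common preprocessing step for learned ISPs of this kind is packing the Bayer RAW mosaic into a 4-channel half-resolution tensor before it enters the CNN; whether PyNET-V2 Mobile uses exactly this RGGB layout is an assumption, so the snippet below is a generic sketch rather than the authors' pipeline.

```python
# Not the authors' pipeline: packing an RGGB Bayer mosaic into a 4-channel
# half-resolution tensor, a typical input format for learned ISP models.
import numpy as np

def pack_bayer_rggb(raw):
    """(H, W) Bayer mosaic -> (H/2, W/2, 4) tensor of R, G1, G2, B planes."""
    return np.stack([raw[0::2, 0::2],   # R
                     raw[0::2, 1::2],   # G1
                     raw[1::2, 0::2],   # G2
                     raw[1::2, 1::2]],  # B
                    axis=-1)

raw = np.random.default_rng(0).integers(0, 4095, size=(3000, 4000))  # ~12MP
print(pack_bayer_rggb(raw.astype(np.float32) / 4095.0).shape)  # (1500, 2000, 4)
```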
A new and efficient neural-network and finite-difference hybrid method is developed for solving the Poisson equation in a regular domain with jump discontinuities on embedded irregular interfaces. Since the solution has low regularity across the interface, when applying finite difference discretization to this problem, an additional treatment accounting for the jump discontinuities must be employed. Here, we aim to alleviate such an extra effort and ease our implementation by machine learning methodology. The key idea is to decompose the solution into singular and regular parts. The neural network learning machinery incorporating the given jump conditions finds the singular solution, while the standard finite difference method is used to obtain the regular solution with associated boundary conditions. Regardless of the interface geometry, these two tasks only require supervised learning for function approximation and a fast direct solver for the Poisson equation, making the hybrid method easy to implement and efficient. The two- and three-dimensional numerical results show that the present hybrid method preserves second-order accuracy for the solution and its derivatives, and it is comparable with the traditional immersed interface method in the literature. As an application, we solve the Stokes equations with singular forces to demonstrate the robustness of the present method.
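A 1-D toy version of the decomposition idea can make the mechanics concrete: split u = w + s, where s carries the prescribed jump across the interface and the regular part w is obtained with a standard finite-difference solve. In the paper the singular part is fitted by a neural network from the jump conditions; a closed-form step function stands in for it in this sketch, and the interface location, jump size, and boundary conditions are all invented for illustration.

```python
# 1-D toy sketch of u = w + s for u'' = f on (0,1), u(0) = u(1) = 0, with a
# prescribed jump [u] = J (and [u'] = 0) at x = alpha. The singular part s
# carries the jump (s'' = 0 away from the interface); the regular part w is
# smooth and solved with the standard three-point finite-difference stencil.
import numpy as np

N, alpha, J = 200, 0.5, 1.0          # grid size, interface, jump [u] (assumed)
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
f = np.sin(np.pi * x)                # sample right-hand side of u'' = f

s = J * (x > alpha)                  # singular part: step carrying the jump
# Regular part: w'' = f with w(0) = 0 and w(1) = u(1) - J = -J.
A = (np.diag(-2.0 * np.ones(N - 1)) +
     np.diag(np.ones(N - 2), 1) + np.diag(np.ones(N - 2), -1)) / h**2
rhs = f[1:-1].copy()
rhs[-1] -= (-J) / h**2               # fold the right boundary value into rhs
w = np.zeros(N + 1)
w[1:-1] = np.linalg.solve(A, rhs)
w[-1] = -J

u = w + s                            # full solution with the jump at alpha
print(u[N // 2 - 1], u[N // 2 + 1])  # values on either side of the interface
```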
To achieve accurate 3D object detection at low cost for autonomous driving, many multi-camera methods have been proposed that resolve the occlusion problem of monocular approaches. However, due to the lack of accurate depth estimates, existing multi-camera methods often generate multiple bounding boxes along the depth direction for difficult small objects such as pedestrians, resulting in extremely low recall. Furthermore, directly applying depth prediction modules to existing multi-camera methods, which usually consist of large network architectures, cannot meet the real-time requirements of self-driving applications. To address these issues, we propose Cross-view and Depth-guided Transformers for 3D object detection, CrossDTR. First, our lightweight depth predictor is designed to produce precise object-wise sparse depth maps and low-dimensional depth embeddings without requiring extra depth datasets during supervision. Second, a cross-view depth-guided transformer is developed to fuse the depth embeddings with image features from cameras of different views and generate 3D bounding boxes. Extensive experiments show that our method greatly exceeds existing methods by 10 percent in pedestrian detection and about 3 percent in overall mAP and NDS metrics. Moreover, computational analyses show that our method is 5 times faster than prior approaches. Our code will be made publicly available at https://github.com/sty61010/CrossDTR.
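The fusion step described above can be sketched as cross-attention between object queries and image tokens augmented with depth embeddings; all names and dimensions below are assumptions for illustration, not the released CrossDTR code.

```python
# Illustrative sketch only: fusing low-dimensional depth embeddings with
# multi-view image features, then letting object queries cross-attend to the
# fused tokens before a small head regresses 3D box parameters.
import torch
import torch.nn as nn

B, V, N, D = 2, 6, 100, 256                  # batch, camera views, tokens/view, dim

img_feats = torch.randn(B, V * N, D)         # flattened multi-view features
depth_emb = torch.randn(B, V * N, 32)        # low-dim depth embeddings
fuse = nn.Linear(D + 32, D)                  # simple concatenation fusion (assumed)
tokens = fuse(torch.cat([img_feats, depth_emb], dim=-1))

queries = nn.Parameter(torch.randn(1, 50, D)).expand(B, -1, -1)  # object queries
attn = nn.MultiheadAttention(D, num_heads=8, batch_first=True)
out, _ = attn(queries, tokens, tokens)       # depth-guided cross-attention
box_head = nn.Linear(D, 7)                   # (x, y, z, w, l, h, yaw)
print(box_head(out).shape)                   # torch.Size([2, 50, 7])
```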
A spatial AI that can perform complex tasks through visual signals and cooperate with humans is highly anticipated. To achieve this, we need a visual SLAM that easily adapts to new scenes without pre-training and generates dense maps for downstream tasks in real time. None of the previous learning-based and non-learning-based visual SLAMs satisfy all needs due to the intrinsic limitations of their components. In this work, we develop a visual SLAM named Orbeez-SLAM, which successfully combines implicit neural representation (NeRF) with visual odometry to achieve our goals. Moreover, Orbeez-SLAM can work with a monocular camera since it only needs RGB inputs, making it widely applicable to the real world. We validate its effectiveness on various challenging benchmarks. Results show that our SLAM is up to 800x faster than the strong baseline while achieving superior rendering outcomes.
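At a high level, the abstract describes visual odometry supplying camera poses while an implicit neural map is optimized online from the same RGB stream. The skeleton below only marks where those two components would plug in; both functions are stand-ins, not the Orbeez-SLAM code.

```python
# High-level skeleton (assumptions throughout): tracking provides poses with
# no pre-training, and a NeRF-style map is updated online from RGB frames.
import numpy as np

def track_pose(frame):                 # stand-in for ORB-style visual odometry
    return np.eye(4)                   # 4x4 camera-to-world pose

def nerf_update(nerf_params, frame, pose, lr=1e-2):
    return nerf_params - lr * 0.0      # stand-in for one NeRF training step

nerf_params = np.zeros(10)
for frame_id in range(5):              # the online SLAM loop
    frame = np.random.default_rng(frame_id).random((480, 640, 3))  # RGB only
    pose = track_pose(frame)           # tracking
    nerf_params = nerf_update(nerf_params, frame, pose)  # dense online mapping
print("processed", frame_id + 1, "frames")
```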
Recent advances in neural radiance fields (NeRF) have achieved state-of-the-art novel view synthesis and facilitated dense estimation of scene properties. However, NeRF often fails on large, unbounded scenes captured under very sparse views with the scene content concentrated far away from the camera, as is typical in field-robotics applications. In particular, NeRF-style algorithms perform poorly: (1) when there are insufficient views with little pose diversity, (2) when scenes contain saturation and shadows, and (3) when finely sampling large unbounded scenes with fine structures, which becomes computationally intensive. This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views. This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained with LiDAR and camera data, respectively. In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverages this occupancy grid to improve the sampling of points along a ray for volumetric rendering in metric space. Through extensive quantitative and qualitative experiments on scenes from the KITTI dataset, this paper demonstrates that the proposed method outperforms state-of-the-art NeRF models on both novel view synthesis and dense depth prediction tasks when trained on sparse input data.
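The core decoupling can be sketched as two independent MLPs, one for occupancy supervised by LiDAR and one for color supervised by camera rays; the widths, depths, and lack of positional encoding below are invented for brevity, so this is a sketch under stated assumptions rather than the CLONeR release.

```python
# Sketch: separate occupancy and color networks, queried on ray samples that
# the paper would draw with guidance from its differentiable occupancy grid.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=128):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

occupancy_mlp = mlp(3, 1)   # xyz -> occupancy logit, trained on LiDAR returns
color_mlp = mlp(3 + 3, 3)   # xyz + view direction -> RGB, trained on camera

pts = torch.randn(1024, 3)           # ray samples (OGM-guided in the paper)
dirs = torch.randn(1024, 3)
sigma = occupancy_mlp(pts)
rgb = torch.sigmoid(color_mlp(torch.cat([pts, dirs], dim=-1)))
print(sigma.shape, rgb.shape)        # torch.Size([1024, 1]) torch.Size([1024, 3])
```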
Generating images from hand-drawings is a crucial and fundamental task in content creation. The translation is difficult because there exist infinite possibilities and different users usually expect different results. We therefore propose a unified framework that supports three-dimensional control over image synthesis from sketches and strokes based on diffusion models. Users can decide not only the level of faithfulness to the input strokes and sketches, but also the degree of realism, since user inputs are usually not consistent with real images. Qualitative and quantitative experiments demonstrate that our framework achieves state-of-the-art performance while providing the flexibility to generate customized images with control over shape, color, and realism. Moreover, our method unleashes applications such as editing on real images, generation from partial sketches and strokes, and multi-domain multi-modal synthesis.
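The paper's exact conditioning scheme is not given in the abstract; one conceptual reading of the three-dimensional control is sketched below, blending sketch- and stroke-conditioned score estimates with an unconditional one (two guidance weights for shape/color fidelity) and setting the starting noise level for realism, SDEdit-style. Every function here is a placeholder.

```python
# Conceptual sketch only: guidance-weighted mixing of conditional scores, with
# the starting noise level t_start acting as the realism knob.
import numpy as np

def eps_uncond(x, t):        return 0.10 * x       # placeholder score nets
def eps_sketch(x, t, sk):    return 0.10 * x + 0.01 * sk
def eps_stroke(x, t, st):    return 0.10 * x + 0.01 * st

def guided_eps(x, t, sk, st, w_sketch=2.0, w_stroke=1.0):
    e0 = eps_uncond(x, t)
    # Higher weights -> outputs more faithful to the sketch/stroke inputs.
    return (e0 + w_sketch * (eps_sketch(x, t, sk) - e0)
               + w_stroke * (eps_stroke(x, t, st) - e0))

rng = np.random.default_rng(0)
x, sk, st = rng.normal(size=(3, 16, 16))
t_start = 0.6   # realism knob: starting lower keeps more of the user input
for t in np.linspace(t_start, 0.0, 30):
    x = x - 0.05 * guided_eps(x, t, sk, st)
print(x.mean())
```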
Video retrieval has made tremendous progress with the development of vision-language models. However, further improving these models requires additional labeled data, which is a huge manual effort. In this paper, we propose a framework, MKTVR, that leverages knowledge transfer from multilingual models to boost the performance of video retrieval. We first use state-of-the-art machine translation models to construct pseudo ground-truth multilingual video-text pairs. We then use this data to learn a video-text representation in which English and non-English text queries are represented in a common embedding space based on a pretrained multilingual model. We evaluate our proposed approach on four English video retrieval datasets: MSRVTT, MSVD, DiDeMo, and Charades. Experimental results demonstrate that our approach achieves state-of-the-art results on all datasets, outperforming previous models. Finally, we also evaluate our model on a multilingual video retrieval dataset encompassing six languages and show that it outperforms previous multilingual video retrieval models in a zero-shot setting.
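A minimal sketch of how such video-text pairs are typically aligned is a symmetric InfoNCE loss that pulls each video embedding toward both its English caption and the machine-translated counterpart in the shared embedding space; the encoders are stand-ins (in the paper the text side is a pretrained multilingual model), so this is not the MKTVR implementation.

```python
# Sketch: symmetric contrastive alignment of video embeddings with English and
# machine-translated caption embeddings in a shared space.
import torch
import torch.nn.functional as F

def info_nce(video, text, temp=0.07):
    v = F.normalize(video, dim=-1)
    t = F.normalize(text, dim=-1)
    logits = v @ t.T / temp                       # cosine-similarity logits
    labels = torch.arange(len(v))                 # matching pairs on the diagonal
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2

B, D = 8, 512
video_emb = torch.randn(B, D)
en_text_emb = torch.randn(B, D)                   # English caption embeddings
mt_text_emb = torch.randn(B, D)                   # machine-translated captions
loss = info_nce(video_emb, en_text_emb) + info_nce(video_emb, mt_text_emb)
print(loss.item())
```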